Understanding Health Misinformation Transmission: An Interpretable Deep Learning Approach to Manage Infodemics

Xie, Jiaheng, Chai, Yidong, Liu, Xiao

arXiv.org Artificial Intelligence

Health misinformation on social media devastates physical and mental health, invalidates health gains, and potentially costs lives. Understanding how health misinformation is transmitted is an urgent goal for researchers, social media platforms, health sectors, and policymakers seeking to mitigate those ramifications. Deep learning methods have been deployed to predict the spread of misinformation. While achieving state-of-the-art predictive performance, deep learning methods lack interpretability due to their black-box nature. To address this gap, this study proposes a novel interpretable deep learning approach, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD), to predict health misinformation transmission on social media. Improving upon state-of-the-art interpretable methods, GAN-PiWAD captures the interactions among multi-modal data, offers unbiased estimation of the total effect of each feature, and models the dynamic total effect of each feature as its value varies. We select features according to social exchange theory and evaluate GAN-PiWAD on 4,445 misinformation videos. The proposed approach outperformed strong benchmarks. Interpretation of GAN-PiWAD indicates that video description, negative video content, and channel credibility are key features driving viral transmission of misinformation. This study contributes to IS research with a novel interpretable deep learning method that is generalizable to understanding other human decision factors. Our findings provide direct implications for social media platforms and policymakers in designing proactive interventions to identify misinformation, control transmission, and manage infodemics.


Qualitätsmaße binärer Klassifikationen im Bereich kriminalprognostischer Instrumente der vierten Generation (Quality Measures of Binary Classifications in the Field of Fourth-Generation Criminal Prognosis Instruments)

Krafft, Tobias D.

arXiv.org Machine Learning

This master's thesis examines an important issue in how algorithmic decision making (ADM) is used in crime forecasting. In the United States, forecasting tools are widely used by judicial systems to make decisions about risk offenders; by relying on such tools, the judiciary expects error-free judgements about offenders. Given the criticality of these judgements and the high dependency on ADM tools, risk instruments that aid decision making must be evaluated carefully. For this purpose, a widely used quality measure for machine learning techniques, the $AUC$ (area under the curve), is compared and contrasted with the $PPV_k$ (positive predictive value). The evaluation is conducted with a common machine learning approach, the binary classifier, since it determines the binary outcome of the underlying juristic question. This thesis shows that the $PPV_k$ models the decisions of judges much better than the $AUC$. It therefore investigates whether there exists a classifier for which the $PPV_k$ deviates from the $AUC$ by a large margin, and shows that the deviation can rise up to 0.75. To test this deviation on a classifier already in use, data from the fourth-generation risk assessment tool COMPAS was used. The results were quite alarming, as the two measures deviate from each other by 0.48. The risk assessment evaluation of the forecasting tools was thus successfully conducted, carefully reviewed, and examined. Finally, the thesis discusses whether such decision-making systems should be socially accepted.
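The divergence between the two measures can be illustrated with a small sketch (not the thesis's implementation; the toy scores and labels below are invented for illustration): the $AUC$ summarizes ranking quality over all instances, while the $PPV_k$ measures precision among the $k$ highest-scored instances, which is exactly where a "flag the top $k$ offenders" decision bites. A classifier can rank well overall yet put false alarms at the very top of the list.

```python
# Sketch: AUC vs. PPV@k on toy scores (illustrative, not the thesis's data).

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a random positive is scored above a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ppv_at_k(scores, labels, k):
    """Fraction of true positives among the k highest-scored instances."""
    top = sorted(zip(scores, labels), key=lambda t: -t[0])[:k]
    return sum(y for _, y in top) / k

# Toy case: the ranking is decent overall, but the two top-scored
# instances are both false alarms.
labels = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.95, 0.90, 0.80, 0.75, 0.70, 0.65, 0.30, 0.25, 0.20, 0.15]

print(round(auc(scores, labels), 2))   # prints 0.67
print(ppv_at_k(scores, labels, 2))     # prints 0.0
```

Here the AUC of roughly 0.67 suggests a usable classifier, yet every instance flagged by a top-2 decision rule is a false positive, mirroring the kind of deviation between the two measures that the thesis quantifies.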